You have probably heard about code obfuscation and that you should use it to protect your apps – but what exactly is it, and is code obfuscation alone enough?
Short introduction to code obfuscation
Over the past decade, we have seen exponential growth in mobile and app-based cybercrime. The ever-evolving mobile threat landscape offers plenty of freely available tools that attackers can use to hook into their targets’ proprietary software and reverse-engineer apps in order to identify weaknesses, uncover secrets, and gather highly sensitive information.
Born from the need to mitigate such attacks, code obfuscation has become a standard technique that developers use to deter cybercriminals from decompiling and reverse-engineering source code, protecting apps against intellectual property theft.
What is code obfuscation and how does it work?
At its core, obfuscation describes the act of obscuring or making something harder to understand. Code obfuscation is therefore a method of modifying an app’s code to make it difficult for attackers to read or comprehend. While the code’s functionality remains the same, obfuscation conceals its logic and purpose.
The process consists of simple but reliable techniques which, when used together, build a strong layer of defense around an app’s source code. Obfuscation techniques are classified by the information they target: some transformations target the lexical structure of the software, while others target the control flow.
Some examples include renaming functions, methods, and classes to less descriptive names. Additional techniques include removing debug information, such as parameter types, source file names, and line numbers, as well as stripping Java annotations.
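To make this concrete, below is a simplified, hypothetical Java sketch of what renaming obfuscation does: the names in the “before” version describe the code’s purpose, while the “after” version (representative of what a renaming tool might emit, not the output of any specific product) conveys nothing about intent.

```java
// Before obfuscation: descriptive names reveal what the code is for.
class PaymentValidator {
    boolean isCardNumberValid(String cardNumber) {
        return cardNumber != null && cardNumber.length() == 16;
    }
}

// After renaming obfuscation (illustrative output only): identifiers carry
// no meaning, and debug metadata such as source file names and line numbers
// would typically be stripped as well.
class a {
    boolean a(String a) {
        return a != null && a.length() == 16;
    }
}
```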
Promon’s In-App Protection software, Promon SHIELD™, obfuscates parts or all of an app’s code, making it significantly more difficult for an attacker to analyze.
Why you should obfuscate both native and non-native apps
JavaScript
Developing a single hybrid app is quicker and may be more cost-effective than developing native Android and iOS apps individually. However, hybrid apps can be more vulnerable to attack than apps written in native languages, because the JavaScript is not compiled into a lower-level representation in the published app and is therefore easier to reverse engineer and modify.
iOS
Objective-C and Swift are the most common programming languages for iOS apps. Both are compiled to machine code, which makes it more difficult to translate the code back to the original source code. This has created a misconception that iOS apps are hard to reverse engineer. However, the interest in analyzing and understanding machine code is nothing new, and mature tooling for reverse engineering machine code exists, built on years of research and expertise in the field.
Android
The Android operating system is hugely popular, and developers are constantly building new apps designed to run on it. Generally speaking, all mobile code is prone to reverse engineering – but code written in languages that allow dynamic introspection at runtime, such as Java, is particularly at risk.
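As a brief illustration of that risk, the following Java sketch uses the standard reflection API to enumerate a class’s declared methods at runtime; when identifiers are left unobfuscated, this kind of introspection alone reveals a great deal about an app’s structure.

```java
import java.lang.reflect.Method;

public class IntrospectionDemo {
    public static void main(String[] args) throws Exception {
        // Load a class by name at runtime and list its declared methods.
        // With readable (unobfuscated) names, the output alone tells an
        // analyst a great deal about what the class does.
        Class<?> clazz = Class.forName("java.lang.String");
        for (Method m : clazz.getDeclaredMethods()) {
            System.out.println(m.getName());
        }
    }
}
```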
Learn more about code obfuscation – download our guide
Is obfuscation enough?
The question remains: is code obfuscation the “silver bullet” of mobile app security? Code obfuscation, while very effective, is nothing more than a “speed bump”: it forces an attacker to spend far more time and effort reverse-engineering an app to visualize and understand its logic. It does not, however, protect against malware or the presence of debuggers and emulators, for example.
This is particularly important because debuggers and emulators are tools that attackers frequently use when attempting to reverse engineer an app. Such tools can be used to analyze an app to determine how it works and to extract sensitive information.
Attackers may also attempt to reverse engineer an app at runtime by injecting code into the app as a means of controlling it from within. Well-documented tools such as Frida and Xposed automate these processes.
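To show what such injection looks like in practice, here is a minimal sketch of an Xposed-style hook written against the standard XposedHelpers API; the target class and method names are hypothetical, chosen only to illustrate how little code is needed to override an app’s logic at runtime.

```java
import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.XposedHelpers;
import de.robv.android.xposed.callbacks.XC_LoadPackage;

public class ExampleHook implements IXposedHookLoadPackage {
    @Override
    public void handleLoadPackage(XC_LoadPackage.LoadPackageParam lpparam) {
        // Hypothetical target: force a security check to always return true.
        XposedHelpers.findAndHookMethod(
                "com.example.bank.LoginManager",   // hypothetical class name
                lpparam.classLoader,
                "isDeviceTrusted",                 // hypothetical method name
                new XC_MethodHook() {
                    @Override
                    protected void afterHookedMethod(MethodHookParam param) {
                        param.setResult(true);     // override the return value
                    }
                });
    }
}
```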
Obfuscation, while useful, cannot do much to protect against these techniques.
Malware doesn’t care
Ultimately, code obfuscation alone is not enough to handle complex mobile security threats. Although it makes an app’s code more difficult to read and understand, the availability of automated tools, combined with attackers’ expertise, means reverse engineering is far from impossible.
Sophisticated mobile malware exploits various security vulnerabilities in the mobile OS and uses diverse techniques to achieve its goals. Malware can, for example, misuse the Android Accessibility APIs to attack an app. These APIs are intended to help people with disabilities use their smartphones; however, they also open the door for malware developers to create sneaky mobile trojans. Such exploits enable mobile malware to act as a screen reader, for example, meaning that it can interactively and remotely control the device and, in doing so, any app it desires.
Recent examples that highlight this are the infamous StrandHogg and StrandHogg 2.0 vulnerabilities: Android vulnerabilities that enable mobile malware to masquerade as legitimate apps while allowing attackers to steal passwords and other sensitive app data.
The optimal solution: Combine code obfuscation with runtime protection
The mobile threat landscape is getting bigger, busier, and more complex. A security breach can kill both a developer’s app and their reputation. Beyond the potential for financial fraud, user data and sensitive information can be stolen, putting businesses at risk of regulatory compliance violations, not to mention bad publicity. Many may also lose the trust of shareholders and customers, potentially causing irreparable harm to their brand.
It is undeniable that code obfuscation makes businesses less prone to licensing fraud, reverse engineering, and intellectual property theft. However, it should not be relied upon on its own, as it does not protect apps from malware or real-world attack scenarios.
Developers should strongly consider combining code obfuscation with a multi-layered runtime app protection solution, particularly if their apps run in an untrusted environment (operating system).
To be truly protected, an app should be able to detect the presence of code hooks and code injection frameworks, and block the injection of malicious code into the app process.
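As a rough illustration of one such check, the Java sketch below scans the process’s own memory map for library names commonly associated with injection frameworks. This is a simplified heuristic with an illustrative, non-exhaustive marker list; real-world protections combine many more signals.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class InjectionCheck {
    // Library name fragments commonly associated with injection frameworks.
    // Illustrative only, not an exhaustive list.
    private static final String[] SUSPICIOUS = { "frida", "xposed", "substrate" };

    // Scan the process's own memory map for suspicious loaded libraries.
    public static boolean suspiciousLibraryLoaded() {
        try (BufferedReader reader = new BufferedReader(new FileReader("/proc/self/maps"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                for (String marker : SUSPICIOUS) {
                    if (line.toLowerCase().contains(marker)) {
                        return true;
                    }
                }
            }
        } catch (IOException ignored) {
            // If the map cannot be read, fall through and report nothing found.
        }
        return false;
    }
}
```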
App providers should consider adopting In-App Protection software, which enables their apps to detect when an untrusted screen reader is active and block it from receiving data from the protected app, adding further layers of anti-tampering controls.
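A simplified sketch of such a check, using Android’s standard AccessibilityManager API, might look like the following; the allow-list of trusted screen-reader packages is hypothetical and would need to be maintained for each deployment.

```java
import android.accessibilityservice.AccessibilityServiceInfo;
import android.content.Context;
import android.view.accessibility.AccessibilityManager;

import java.util.Arrays;
import java.util.List;

public class ScreenReaderCheck {
    // Hypothetical allow-list: packages the app considers trusted screen readers.
    private static final List<String> TRUSTED_PACKAGES =
            Arrays.asList("com.google.android.marvin.talkback");

    // Returns true if any enabled accessibility service is not on the allow-list.
    public static boolean untrustedScreenReaderActive(Context context) {
        AccessibilityManager am =
                (AccessibilityManager) context.getSystemService(Context.ACCESSIBILITY_SERVICE);
        List<AccessibilityServiceInfo> enabled =
                am.getEnabledAccessibilityServiceList(AccessibilityServiceInfo.FEEDBACK_ALL_MASK);
        for (AccessibilityServiceInfo info : enabled) {
            String pkg = info.getResolveInfo().serviceInfo.packageName;
            if (!TRUSTED_PACKAGES.contains(pkg)) {
                return true;
            }
        }
        return false;
    }
}
```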
This can also include functionality that checks for the presence of debuggers or emulators, as well as jailbreak and root detection. Security controls that allow an app to detect whether it is being executed in an emulator or a typical virtual environment add an extra layer of complexity for an attacker, further strengthening protection against reverse-engineering attacks.
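For illustration, the Java sketch below shows simplified versions of these checks using standard Android APIs; the emulator and root heuristics are intentionally basic, and production-grade protections combine far more signals.

```java
import android.os.Build;
import android.os.Debug;

import java.io.File;

public class EnvironmentCheck {
    // A debugger attached via the Android runtime is directly detectable.
    public static boolean debuggerAttached() {
        return Debug.isDebuggerConnected();
    }

    // Simplified emulator heuristic based on build properties.
    public static boolean likelyEmulator() {
        return Build.FINGERPRINT.startsWith("generic")
                || Build.MODEL.contains("Emulator")
                || Build.MODEL.contains("Android SDK built for x86");
    }

    // Simplified root heuristic: look for a su binary in common locations.
    public static boolean likelyRooted() {
        String[] paths = { "/system/bin/su", "/system/xbin/su", "/sbin/su" };
        for (String path : paths) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }
}
```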
Code obfuscation is like hiding a safe behind a painting. If you have a secure enough lock, it shouldn’t matter who can see it.
Read more
eGovernment apps are on the rise – but how secure are they? We assessed the top Android and iOS eGov apps in the APAC region, and the results are worrying: 50% of the apps we analyzed do not use code obfuscation. Our report shows that major government apps in Asia leak sensitive data and lack basic security.